32 research outputs found

    Integrated Decision Support System – iDSS for Library Holistic Evaluation

    Get PDF
    The decision-making process in academic libraries is paramount; however, it is highly complicated due to the large number of data sources and processes and the high volumes of data to be analyzed. Academic libraries are accustomed to producing and gathering a vast amount of statistics about their collections and services. Typical data sources include integrated library systems, library portals and online catalogues, consortium systems, quality surveys, and university management. Unfortunately, these heterogeneous data sources are only partially used for decision-making processes due to the wide variety of formats, standards, and technologies, as well as the lack of efficient integration methods. This article presents the analysis and design of an integrated decision support system for an academic library. Firstly, a holistic approach documented in a previous study is used for data collection. This holistic approach incorporates key elements, including process analysis, quality estimation, information relevance, and user interaction, that may influence a library's decisions. Based on this approach, the study defines a set of queries of interest to be issued against the proposed integrated system. Then, relevant data sources, formats, and connectivity requirements for a particular example are identified. Next, a data warehouse architecture is proposed to integrate, process, and store the collected data transparently. Finally, the stored data are analyzed through reporting techniques such as online analytical processing (OLAP) tools. In doing so, the article provides the design of an integrated solution that assists library managers in making tactical decisions about the optimal use and leverage of their resources and services.
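
    As a rough illustration of the kind of reporting the proposed warehouse could support, the sketch below emulates an OLAP-style roll-up over a hypothetical library loans fact table using pandas; the table layout, column names, and figures are assumptions made for illustration, not the schema used in the article.

```python
# Minimal OLAP-style roll-up over a hypothetical library loans fact table.
# Column names and values are illustrative only.
import pandas as pd

loans = pd.DataFrame({
    "year":       [2016, 2016, 2017, 2017, 2017, 2017],
    "faculty":    ["Engineering", "Arts", "Engineering", "Arts", "Engineering", "Arts"],
    "collection": ["Print", "Print", "Print", "Digital", "Digital", "Digital"],
    "loans":      [1200, 800, 1100, 950, 2300, 1700],
})

# Roll-up: total loans by year and collection type, broken down by faculty.
cube = loans.pivot_table(index=["year", "collection"],
                         columns="faculty",
                         values="loans",
                         aggfunc="sum",
                         fill_value=0)
print(cube)
```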

    4th Congreso Internacional de Ciencia, Tecnología e Innovación para la Sociedad: academic proceedings

    Get PDF
    This volume contains the academic proceedings of the fourth edition of the Congreso Internacional de Ciencia, Tecnología e Innovación para la Sociedad, CITIS 2017, held from 29 November to 1 December 2017 and organized by the Universidad Politécnica Salesiana (UPS) at its Guayaquil campus. The Congress offered a space for the presentation, dissemination, and exchange of important national and international research before the university community gathered at the event. The use of technological tools for managing the research contributions, such as the Open Conference Systems platform and the Congress website http://citis.blog.ups.edu.ec/, made CITIS 2017 a true benchmark among the congresses held in the country. Our University's commitment to providing spaces that help generate new and better changes in the human and social dimension of our environment drives each edition of the event to seek contributions of ever-increasing quality in terms of scientific production. Those of us who led the organization have recorded in these academic proceedings the intense and prolific work carried out during the days of the Congreso Internacional de Ciencia, Tecnología e Innovación para la Sociedad, making it available to everyone.

    Building a Knowledge Graph from Historical Newspapers: A Study Case in Ecuador

    No full text
    History shows that different events occur every day around the world. In the past, knowledge of these events could only be transmitted orally from generation to generation due to a lack of appropriate technology. Currently, vast amounts of valuable historical information rest in deteriorated historical newspapers, which are very difficult to work with. In this work, we use text digitization, text mining, and Semantic Web technologies to generate a knowledge graph of events that occurred in Ecuador in the 19th and 20th centuries.
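
    As a hedged sketch of the kind of output such a pipeline could produce, the snippet below encodes a single extracted newspaper event as RDF triples with rdflib; the namespace, property names, and the example event are assumptions made for illustration, not the vocabulary used in the study.

```python
# Minimal sketch: representing one extracted historical event as RDF triples.
# The namespace, properties, and event data are illustrative assumptions.
from rdflib import Graph, Namespace, Literal, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/newspaper-events/")

g = Graph()
g.bind("ex", EX)

event = URIRef(EX["event/1896-fire-guayaquil"])
g.add((event, RDF.type, EX.HistoricalEvent))
g.add((event, EX.label, Literal("Great fire of Guayaquil", lang="en")))
g.add((event, EX.date, Literal("1896-10-05", datatype=XSD.date)))
g.add((event, EX.place, Literal("Guayaquil, Ecuador")))
g.add((event, EX.reportedIn, URIRef(EX["newspaper/el-comercio"])))

print(g.serialize(format="turtle"))
```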

    Design of an integrated Decision Support System for library holistic evaluation

    No full text
    The decision-making process in academic libraries is paramount; however, it is highly complicated due to the large number of data sources and processes and the high volumes of data to be analyzed. Academic libraries are accustomed to producing and gathering a vast amount of statistics about their collections and services. Typical data sources include integrated library systems, library portals and online catalogues, consortium systems, quality surveys, and university management. Unfortunately, these heterogeneous data sources are only partially used for decision-making processes due to the wide variety of formats, standards, and technologies, as well as the lack of efficient integration methods. This article presents the analysis and design of an integrated decision support system for an academic library. Firstly, a holistic approach documented in a previous study is used for data collection. This holistic approach incorporates key elements, including process analysis, quality estimation, information relevance, and user interaction, that may influence a library's decisions. Based on this approach, the study defines a set of queries of interest to be issued against the proposed integrated system. Then, relevant data sources, formats, and connectivity requirements for a particular example are identified. Next, a data warehouse architecture is proposed to integrate, process, and store the collected data transparently. Finally, the stored data are analyzed through reporting techniques such as online analytical processing (OLAP) tools. In doing so, the article provides the design of an integrated solution that assists library managers in making tactical decisions about the optimal use and leverage of their resources and services.

    IT governance for the University of Cuenca

    No full text
    Information technology governance in Latin America is still a work in progress. The University of Cuenca faces new challenges due to its institutional growth and the development of its teaching and research, from which the institution as a whole does not yet obtain a significant contribution from information technology. To address this problem, the University of Cuenca proposes an information technology governance model based on COBIT processes, which involves important changes in the role that technology plays in the organization, the prioritization of its strategies and governance components, the working model, the governance and management structures, the processes, and the management of corporate culture.

    EDA and a tailored data imputation algorithm for daily ozone concentrations

    No full text
    Air pollution is a critical environmental problem with detrimental effects on human health that is affecting all regions of the world, especially low-income cities, where critical levels have been reached. Air pollution has a direct role in public health, climate change, and the worldwide economy. Effective actions to mitigate air pollution, e.g. research and decision making, require the availability of high-resolution observations. This has motivated the emergence of new low-cost sensor technologies, which have the potential to provide high-resolution data thanks to their accessible prices. However, since low-cost sensors are built with relatively low-cost materials, they tend to be unreliable. That is, measurements from low-cost sensors are prone to errors, gaps, bias, and noise. All these problems need to be solved before the data can be used to support research or decision making. In this paper, we address the problem of data imputation on a daily air pollution data set with relatively small gaps. Our main contributions are: (1) an air pollution data set composed of several air pollution concentrations, including criteria gases, and thirteen meteorological covariates; and (2) a custom algorithm for data imputation of daily ozone concentrations based on a trend surface and a Gaussian Process. Data visualization techniques were used extensively throughout this work, as they are useful tools for understanding the multi-dimensionality of point-referenced sensor data.
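
    A minimal sketch of the imputation idea described above, assuming a single daily ozone series indexed by day of year: a low-order polynomial trend is fitted first, and a Gaussian Process models the residuals so that missing days can be filled in. The kernel choice, trend degree, and synthetic data are assumptions for illustration, not the algorithm's actual configuration.

```python
# Sketch: impute gaps in a daily ozone series with a polynomial trend + GP on residuals.
# Synthetic data, trend degree, and kernel are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
days = np.arange(1, 366)
ozone = 30 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
ozone[100:110] = np.nan                      # a small gap to impute

observed = ~np.isnan(ozone)
x_obs, y_obs = days[observed], ozone[observed]

# 1) Fit a low-order polynomial trend to the observed days.
trend_coef = np.polyfit(x_obs, y_obs, deg=3)
trend = np.polyval(trend_coef, days)

# 2) Model the residuals with a Gaussian Process.
kernel = RBF(length_scale=30.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(x_obs.reshape(-1, 1), y_obs - trend[observed])

# 3) Impute: trend + GP prediction at the missing days.
missing = ~observed
ozone_imputed = ozone.copy()
ozone_imputed[missing] = trend[missing] + gp.predict(days[missing].reshape(-1, 1))
print(ozone_imputed[98:112].round(1))
```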

    Monitoring of system memory usage embedded in FPGA

    Get PDF
    In the field of FPGAs, only RAM tests have so far been carried out to evaluate performance, and these works have not focused on tracking memory usage in real time. This paper proposes a design for monitoring the memory of an embedded system on the logic side, making use of the communication between the FPGA and the HPS. In addition, the HPS implements a web service that allows the monitoring to be visualized as a real-time graph. The proposed design can serve as an introduction to the development of applications that monitor a specific component of an FPGA-based embedded system, since FPGAs are currently used for different purposes such as machine learning, real-time image processing, and Bitcoin mining, among others. These applications are quite demanding, which implies a high processing load for the embedded system.
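
    As a rough, hedged sketch of the HPS-side idea (not the paper's actual implementation), the snippet below periodically samples system memory usage from /proc/meminfo on an embedded Linux system and exposes the latest readings as JSON over HTTP, which a web page could then plot in real time.

```python
# Sketch: sample memory usage on an embedded Linux HPS and expose it over HTTP.
# This illustrates the monitoring idea only; it is not the design from the paper.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

samples = []  # most recent (timestamp, used_kB) readings

def read_used_memory_kb():
    """Compute used memory from /proc/meminfo as MemTotal - MemAvailable."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info["MemTotal"] - info["MemAvailable"]

def sampler(period_s=1.0, max_samples=300):
    while True:
        samples.append((time.time(), read_used_memory_kb()))
        del samples[:-max_samples]  # keep only the most recent window
        time.sleep(period_s)

class MemoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(samples).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=sampler, daemon=True).start()
    HTTPServer(("0.0.0.0", 8080), MemoryHandler).serve_forever()
```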

    Ensuring traceability and orchestration in the food supply chain

    No full text
    Ensuring traceability and orchestration of the participants in the food supply chain can help improve food production and reduce the distribution of unsafe or low-quality products. This article provides some insights into the design of a high-level architecture to support a semantic blockchain platform that ensures traceability and orchestration of food systems. The design involves two layers that cover: (a) the decentralized orchestration of the participants, (b) the semantic modeling of the data and processes involved, and (c) the storage and integrity of the data. To inform the platform design, the operation and attributes of food supply chain management are analyzed, and it is discussed how the combination of Semantic Web and blockchain technologies can address the platform features.
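
    As a hedged illustration of combining semantic modeling with blockchain-backed integrity (a sketch, not the paper's architecture), the snippet below describes a food batch as RDF triples and derives a content hash that could be anchored on a chain; the vocabulary, identifiers, and batch data are invented for the example.

```python
# Sketch: semantic description of a food batch plus a content hash for on-chain anchoring.
# Vocabulary, identifiers, and data are illustrative assumptions only.
import hashlib
from rdflib import Graph, Namespace, Literal, RDF, URIRef
from rdflib.namespace import XSD

FOOD = Namespace("http://example.org/food-chain/")

g = Graph()
g.bind("food", FOOD)

batch = URIRef(FOOD["batch/2023-0001"])
g.add((batch, RDF.type, FOOD.Batch))
g.add((batch, FOOD.product, Literal("Cocoa beans")))
g.add((batch, FOOD.producedBy, URIRef(FOOD["producer/finca-la-esperanza"])))
g.add((batch, FOOD.harvestDate, Literal("2023-06-12", datatype=XSD.date)))
g.add((batch, FOOD.shippedTo, URIRef(FOOD["processor/plant-01"])))

# Serialize the record and hash it so any later change to the data is detectable.
payload = g.serialize(format="nt").encode()
record_hash = hashlib.sha256(payload).hexdigest()
print("batch record hash:", record_hash)  # this digest could be stored on a blockchain
```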

    Digital repositories and linked data: lessons learned and challenges

    No full text
    Digital repositories have been used by universities and libraries to store their bibliographic, scientific, and/or institutional contents and then make the corresponding metadata publicly available on the web through the OAI-PMH protocol. However, such metadata is not descriptive enough for a document to be easily discoverable. Even though the emergence of Semantic Web technologies has sparked the interest of digital repository providers in publishing and enriching their content using Linked Data (LD) technologies, those institutions have used different generation approaches and, in certain cases, ad-hoc solutions for particular use cases, but none of them has compared the existing approaches to demonstrate which one is the best solution prior to its application. To address this question, we have performed a benchmark study that compares two commonly used generation approaches and also describes our experience, lessons learned, and challenges found during the process of publishing a DSpace digital repository as LD. Results show that the straightforward method for extracting data from a digital repository is through the standard OAI-PMH protocol, whose execution time is much shorter than that of the database approach, while the additional data cleaning tasks are minimal.
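
    As a hedged sketch of the OAI-PMH extraction route mentioned above (the endpoint URL and field handling are assumptions, not the paper's actual setup), the snippet below harvests Dublin Core records from a DSpace repository with the Sickle client, a common starting point before mapping the metadata to RDF.

```python
# Sketch: harvest Dublin Core metadata from a DSpace repository over OAI-PMH.
# The endpoint URL is a placeholder; point it at a real repository to use it.
from sickle import Sickle

sickle = Sickle("https://repository.example.edu/oai/request")

# ListRecords streams every record exposed under the oai_dc metadata prefix.
records = sickle.ListRecords(metadataPrefix="oai_dc", ignore_deleted=True)

for i, record in enumerate(records):
    metadata = record.metadata          # dict of Dublin Core fields -> list of values
    title = metadata.get("title", ["(untitled)"])[0]
    identifier = record.header.identifier
    print(f"{identifier}: {title}")
    if i >= 9:                          # only preview the first ten records here
        break
```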

    Land cover classification of high-resolution images from an Ecuadorian Andean zone using deep convolutional neural networks and transfer learning

    No full text
    Different deep learning models have recently emerged as a popular method for applying machine learning in a variety of domains, including remote sensing, where several approaches for the classification of land cover and land use have been proposed. However, acquiring a suitably large data set with labelled samples for training such models is often a significant challenge, and its absence leads to suboptimal models that cannot generalize well over different types of land cover. In this paper, we present an approach to land cover classification on a small dataset of high-resolution imagery from an area in the Andes of Ecuador using deep convolutional neural networks and techniques such as transfer learning, data augmentation, and some fine-tuning considerations. Results demonstrate that this method can achieve good classification accuracy if it is backed by good strategies to increase the number of samples in an imbalanced dataset.
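
    A minimal, hedged sketch of the transfer learning setup described above (the backbone choice, input size, augmentation, and class count are assumptions for illustration, not the paper's exact configuration): a frozen ImageNet-pretrained backbone with light data augmentation and a new classification head.

```python
# Sketch: transfer learning for land cover classification with a frozen pretrained backbone.
# Backbone, input size, augmentation, and number of classes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 6          # hypothetical number of land cover classes
INPUT_SHAPE = (224, 224, 3)

# Light data augmentation to enlarge a small, imbalanced training set.
augmentation = models.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# ImageNet-pretrained backbone, frozen so only the new head is trained at first.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
backbone.trainable = False

inputs = layers.Input(shape=INPUT_SHAPE)
x = augmentation(inputs)
x = tf.keras.applications.resnet50.preprocess_input(x)
x = backbone(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown here
```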